Diffusion models have achieved state-of-the-art synthesis quality on visual and audio tasks, and recent works adapt them to textual data by diffusing in the embedding space. However, the differences between the continuous data space and the embedding space raise challenges for the diffusion model that have not been carefully explored. In this paper, we conduct systematic studies and identify three challenges. First, the data distribution is learnable for embeddings, which may lead to the collapse of the loss function. Second, as the norm of an embedding varies between popular and rare words, adding noise at the same scale leads to sub-optimal results. In addition, we find that noise sampled from a standard Gaussian distribution may distract the diffusion process. To address these challenges, we propose Difformer, a denoising diffusion probabilistic model based on the Transformer, which combines three techniques: an anchor loss function, a layer normalization module for embeddings, and a norm factor applied to the Gaussian noise. The techniques are complementary to each other and together are critical to boosting model performance. Experiments are conducted on benchmark datasets for two seminal text generation tasks, machine translation and text summarization. The results show that Difformer significantly outperforms the embedding diffusion baselines, while achieving results competitive with strong autoregressive baselines.
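The abstract above names three ingredients (an anchor loss, layer normalization of the embeddings, and a norm factor on the Gaussian noise). Below is a minimal PyTorch sketch of how two of them could be wired into a DDPM-style forward step, plus an anchor-style loss; all class and argument names (`EmbeddingDiffusionStep`, `noise_factor`, `anchor_loss`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class EmbeddingDiffusionStep(nn.Module):
    """Illustrative forward (noising) step for embedding diffusion.

    Combines two of the ideas described above: a LayerNorm applied to the
    token embeddings (so popular and rare words have comparable norms) and
    a scalar factor that enlarges the Gaussian noise. Hypothetical sketch,
    not the paper's implementation.
    """

    def __init__(self, vocab_size, dim, noise_factor=2.0):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)
        self.layer_norm = nn.LayerNorm(dim)   # normalizes embedding norms
        self.noise_factor = noise_factor      # enlarges the noise scale

    def forward(self, token_ids, alpha_bar_t):
        """q(z_t | x): noise the normalized embeddings at step t."""
        e = self.layer_norm(self.embedding(token_ids))
        eps = torch.randn_like(e) * self.noise_factor
        z_t = alpha_bar_t ** 0.5 * e + (1.0 - alpha_bar_t) ** 0.5 * eps
        return z_t, e, eps


def anchor_loss(z0_pred, embedding_weight, token_ids):
    """Anchor-style loss (illustrative): tie the predicted z_0 back to the
    discrete tokens via a cross-entropy over the vocabulary."""
    logits = z0_pred @ embedding_weight.t()          # (batch, seq, vocab)
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), token_ids.reshape(-1)
    )
```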
Our goal is to extend denoising diffusion implicit models (DDIM) to general diffusion models (DMs). Instead of constructing a non-Markovian noising process as in the original DDIM paper, we examine the mechanism of DDIM from a numerical perspective. We find that DDIM can be obtained by using specific approximations of the score when solving the corresponding stochastic differential equation. We present an interpretation of the accelerating effect of DDIM, which also explains the advantage of deterministic sampling schemes over stochastic ones for fast sampling. Building on this insight, we extend DDIM to general DMs with a small but delicate modification to the parameterization of the score network. When applied to the critically-damped Langevin diffusion model, a recently proposed type of diffusion model that augments the diffusion process with velocity, our algorithm achieves an FID score of 2.28 on CIFAR10 with only 50 score function evaluations (NFEs), and an FID score of 2.87 with only 27 NFEs, better than all existing methods with the same number of NFEs. Code is available at https://github.com/qsh-zh/gddim
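For reference, the deterministic DDIM update that the abstract revisits from a numerical point of view can be written in a few lines. The sketch below assumes a standard epsilon-prediction network and a cumulative noise schedule `alpha_bar`; it is the ordinary DDIM step, not the generalized (gDDIM) scheme itself.

```python
import torch


@torch.no_grad()
def ddim_step(x_t, eps_hat, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update (eta = 0).

    x_t:            current noisy sample
    eps_hat:        noise predicted by the epsilon/score network at step t
    alpha_bar_t:    cumulative noise schedule at step t
    alpha_bar_prev: cumulative noise schedule at the earlier target step

    The step first estimates x_0 from x_t and the predicted noise, then
    re-noises it to the earlier time step without injecting fresh noise,
    which is the deterministic scheme referred to above.
    """
    x0_hat = (x_t - (1.0 - alpha_bar_t) ** 0.5 * eps_hat) / alpha_bar_t ** 0.5
    return alpha_bar_prev ** 0.5 * x0_hat + (1.0 - alpha_bar_prev) ** 0.5 * eps_hat
```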
The past few years have witnessed the great success of diffusion models (DMs) in generating high-fidelity samples for generative modeling tasks. A major limitation of DMs is their notoriously slow sampling procedure, which normally requires hundreds to thousands of time-discretization steps to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs that uses far fewer steps while retaining high sample quality. To this end, we systematically analyze the sampling procedure in DMs and identify the key factors that affect sample quality, among which the discretization method is most crucial. By carefully examining the learned diffusion process, we propose the Diffusion Exponential Integrator Sampler (DEIS). It is based on the exponential integrator designed for discretizing ordinary differential equations (ODEs) and leverages the semilinear structure of the learned diffusion process to reduce discretization error. The proposed method can be applied to any DM and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve state-of-the-art sampling performance when the number of score function evaluations (NFE) is limited, e.g., 4.17 FID with 10 NFEs, and 3.37 FID and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at https://github.com/qsh-zh/deis
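As a rough illustration of the exponential-integrator idea DEIS builds on, the toy sketch below applies a first-order exponential Euler step to a generic semilinear ODE dx/dt = a(t) x + b(t, x): the linear part is integrated exactly and only the nonlinear term is frozen over each step. This is not the DEIS sampler, which additionally uses higher-order treatment of the learned score term.

```python
import numpy as np


def exponential_euler(x0, a, b, ts):
    """First-order exponential integrator (exponential Euler) for the
    semilinear ODE dx/dt = a(t) * x + b(t, x).

    The linear coefficient is handled exactly over each step (frozen at the
    step's left endpoint), and only the nonlinear term b is held constant,
    which reduces discretization error when the linear part is stiff.
    Illustrative sketch only.
    """
    x = np.asarray(x0, dtype=float)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        a0 = a(t0)
        phi = np.expm1(a0 * h) / a0 if abs(a0) > 1e-12 else h
        x = np.exp(a0 * h) * x + phi * b(t0, x)
    return x


# Toy usage: integrate dx/dt = -x + sin(t) from x(0) = 1 with 10 steps.
x_end = exponential_euler(1.0, a=lambda t: -1.0, b=lambda t, x: np.sin(t),
                          ts=np.linspace(0.0, 5.0, 11))
```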
To make full use of computer vision technology in stores, it is necessary to consider practical requirements that fit the characteristics of the retail scene. To this end, we introduce the United Retail Datasets (Unitail), a large-scale benchmark of basic visual tasks on products, covering detection, reading, and matching algorithms. With 1.8 million quadrilateral instances annotated, Unitail offers a detection dataset that better aligns with product appearance. It also provides a gallery-style OCR dataset containing 1,454 product categories, 30k text regions, and 21k transcriptions, to enable robust reading on products and to motivate enhanced product matching. Besides benchmarking the datasets with various state-of-the-art techniques, we customize a new detector for product detection and provide a simple OCR-based matching solution to verify its effectiveness.
We present the Path Integral Sampler (PIS), a novel algorithm for drawing samples from unnormalized probability density functions. PIS is built on the Schrödinger bridge problem, which aims to recover the most likely evolution of a diffusion process given its initial and terminal distributions. PIS draws samples from the initial distribution and then propagates them through the Schrödinger bridge to reach the terminal distribution. Applying Girsanov's theorem with a simple prior diffusion, we formulate PIS as a stochastic optimal control problem whose running cost is the control energy and whose terminal cost is chosen according to the target distribution. By modeling the control as a neural network, we obtain a sampling algorithm that can be trained end-to-end. We provide a theoretical justification of the sampling quality of PIS in terms of Wasserstein distance when sub-optimal control is used. Moreover, path integral theory is used to compute importance weights for the samples, compensating for the bias induced by the sub-optimality of the controller and by time discretization. We experimentally demonstrate the advantages of PIS over other state-of-the-art sampling methods on a variety of tasks.
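A minimal sketch of the stochastic-optimal-control view described above: a control network acts as the drift of an Euler-Maruyama simulation, the running cost accumulates the control energy, and the terminal cost compares the reference terminal law (Gaussian, for a Brownian reference process started at zero) with the unnormalized target. The names `ControlNet`, `pis_loss`, and `log_target` are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn


class ControlNet(nn.Module):
    """Small network u_theta(x, t) used as the learned control/drift."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        t_feat = t.expand(x.size(0), 1)
        return self.net(torch.cat([x, t_feat], dim=-1))


def pis_loss(control, log_target, dim, batch=256, steps=100, T=1.0):
    """Monte-Carlo estimate of the control objective: running control
    energy plus a terminal cost built from the (unnormalized) target,
    simulated with Euler-Maruyama from a point mass at zero. The
    reference process is a standard Brownian motion with terminal law
    N(0, T*I). Simplified sketch of the idea, not the exact estimator."""
    dt = T / steps
    x = torch.zeros(batch, dim)
    running = torch.zeros(batch)
    for k in range(steps):
        t = torch.full((1, 1), k * dt)
        u = control(x, t)
        running = running + 0.5 * (u ** 2).sum(-1) * dt
        x = x + u * dt + dt ** 0.5 * torch.randn_like(x)
    # Terminal cost: log reference density minus log target density.
    log_ref = (-0.5 * (x ** 2).sum(-1) / T
               - 0.5 * dim * torch.log(torch.tensor(2 * torch.pi * T)))
    return (running + log_ref - log_target(x)).mean()
```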
In this paper, we propose an algorithm for estimating the parameters of a time-homogeneous hidden Markov model from aggregate observations. This problem arises when only population-level counts of the number of individuals at each time step are available, from which one seeks to learn the individual hidden Markov model. Our algorithm is built upon expectation-maximization and the recently proposed aggregate inference algorithm, Sinkhorn belief propagation. Compared with existing methods such as expectation-maximization with non-linear belief propagation, our algorithm comes with convergence guarantees. Moreover, when observations corresponding to a single individual are recorded, our learning framework naturally reduces to the standard Baum-Welch algorithm. We further extend our learning algorithm to handle HMMs with continuous observations. The efficacy of our algorithm is demonstrated on a variety of datasets.
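The special case mentioned above, where a single individual's observation sequence is recorded, is the classic Baum-Welch setting; a standard scaled forward-backward E-step for that case is sketched below (the aggregate-observation algorithm itself replaces this step with aggregate inference).

```python
import numpy as np


def forward_backward(pi, A, B, obs):
    """Scaled forward-backward pass for a discrete HMM -- the standard
    Baum-Welch E-step that the aggregate framework reduces to when a
    single individual's observations are recorded.

    pi:  (S,)   initial state distribution
    A:   (S, S) transition matrix, A[i, j] = P(z_{t+1}=j | z_t=i)
    B:   (S, O) emission matrix,   B[i, k] = P(x_t=k | z_t=i)
    obs: (T,)   observed symbol indices
    Returns per-step state posteriors gamma of shape (T, S).
    """
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    c = np.zeros(T)                      # scaling factors for stability

    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]

    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```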
Domain generalization (DG) is the challenging and topical problem of learning models that generalize to novel testing domains with different statistics than a set of known training domains. The simple approach of aggregating data from all source domains and training a single deep neural network end-to-end on all the data provides a surprisingly strong baseline that surpasses many prior published methods. In this paper we build on this strong baseline by designing an episodic training procedure that trains a single deep network in a way that exposes it to the domain shift that characterises a novel domain at runtime. Specifically, we decompose a deep network into feature extractor and classifier components, and then train each component by simulating it interacting with a partner who is badly tuned for the current domain. This makes both components more robust, ultimately leading to our networks producing state-of-the-art performance on three DG benchmarks. Furthermore, we consider the pervasive workflow of using an ImageNet trained CNN as a fixed feature extractor for downstream recognition tasks. Using the Visual Decathlon benchmark, we demonstrate that our episodic-DG training improves the performance of such a general purpose feature extractor by explicitly training a feature for robustness to novel problems. This shows that DG training can benefit standard practice in computer vision.
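To make the episodic idea concrete, here is a hypothetical sketch of one such step: the feature extractor of the current domain is trained through a frozen classifier that was tuned for a different domain, so the features must remain useful under a mismatched partner. The function below illustrates the training signal only; it is not the authors' exact procedure.

```python
import torch
import torch.nn as nn


def episodic_feature_step(feat_i, clf_j, x, y, optimizer):
    """Train domain i's feature extractor through domain j's classifier
    (kept frozen), exposing the features to a partner that is "badly
    tuned" for the current domain. Illustrative sketch."""
    feat_i.train()
    clf_j.eval()
    for p in clf_j.parameters():         # the mismatched partner is not updated
        p.requires_grad_(False)

    logits = clf_j(feat_i(x))
    loss = nn.functional.cross_entropy(logits, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    for p in clf_j.parameters():         # restore the partner's gradients
        p.requires_grad_(True)
    return loss.item()
```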
We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, an RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective solution for both tasks.
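A toy sketch of the relation idea: each query embedding is concatenated with a per-class support embedding and scored by a small learnable module, and the class with the highest relation score is predicted. The linear layers below stand in for the convolutional relation module used on images and are purely illustrative.

```python
import torch
import torch.nn as nn


class RelationModule(nn.Module):
    """Toy relation module: concatenates a query embedding with each
    class's support embedding and maps the pair to a score in [0, 1]."""

    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, query_feat, support_feats):
        # query_feat:    (D,)    embedding of one query image
        # support_feats: (C, D)  one (e.g. mean) embedding per class
        C = support_feats.size(0)
        pairs = torch.cat([support_feats, query_feat.expand(C, -1)], dim=-1)
        return self.score(pairs).squeeze(-1)   # (C,) relation scores


# Training would regress the score of the true class toward 1 and the
# others toward 0 with an MSE loss inside each simulated episode.
```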
The growth of free online 3D shape collections has driven research on 3D retrieval. Active debate has, however, centered on (i) what the best input modality is for triggering retrieval, and (ii) the ultimate usage scenario for such retrieval. In this paper, we offer a different perspective on these questions: we study 3D sketches as an input modality and advocate a VR scenario in which retrieval is conducted. The ultimate vision is thus for users to freely retrieve a 3D model by air-doodling in a VR environment. As a first stab at this new problem of 3D VR-sketch to 3D shape retrieval, we make four contributions. First, we code a VR utility to collect 3D VR-sketches and conduct retrieval. Second, we collect the first set of 167 3D VR-sketches on two shape categories from ModelNet. Third, we propose a novel approach to generate a synthetic dataset of human-like 3D sketches at different abstraction levels to train deep networks. Finally, we compare the common multi-view and volumetric approaches: we show that, in contrast to 3D shape to 3D shape retrieval, volumetric point-based methods exhibit superior performance on 3D sketch to 3D shape retrieval due to the sparse and abstract nature of 3D VR-sketches. We believe these contributions will collectively serve as enablers for future attempts at this problem. The VR interface, code, and datasets are available at https://tinyurl.com/3dsketch3dv.
We present the first fine-grained dataset of 1,497 3D VR sketch and 3D shape pairs from the chair category, with large shape diversity. Our dataset supports the recent trend in the sketch community toward fine-grained data analysis and extends it to the actively developing 3D domain. We argue for the most convenient sketching scenario, where the sketch consists of sparse lines and requires no sketching skills, prior training, or time-consuming accurate drawing. We then, for the first time, study the scenario of fine-grained 3D VR sketching for 3D shape retrieval, as a novel VR sketching application and as a proving ground for generic insights to inform future research. By experimenting with carefully selected combinations of design factors on this new problem, we draw important conclusions to help follow-up work. We hope our dataset will enable other novel applications, especially those that require a fine-grained angle, such as fine-grained 3D shape reconstruction. The dataset is available at tinyurl.com/vrsketch3dv21.